Inter-Task Effects Induce Bias in Crowdsourcing
Authors
Abstract
Microtask platforms allow researchers to engage participants quickly and inexpensively. Workers on such platforms typically perform many tasks in succession, so we investigate interactions between earlier tasks and later ones, which we call inter-task effects. Existing research investigates the influence of many task design factors, such as framing, on the quality of responses, but to our knowledge does not address inter-task effects. We used a canonical image-labeling task on Amazon Mechanical Turk to measure the impact of inter-task and framing effects on the focus and specificity of the labels that workers provide. We found that inter-task effects had a much stronger impact than framing, and that workers provided more specific labels when labeling a series of images that were similar to one another.
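One plausible way to quantify label specificity is the depth of a label's synset in the WordNet hypernym hierarchy, where deeper synsets name more specific concepts. The sketch below is an illustration under that assumption, not necessarily the measure used in the paper; the function name and example labels are hypothetical.

# A minimal sketch, assuming WordNet hypernym depth as a proxy for label
# specificity; not the paper's documented measure. Requires NLTK with the
# WordNet corpus installed (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def label_specificity(label):
    # Take the most common noun sense of the label; deeper synsets in the
    # hypernym hierarchy correspond to more specific concepts.
    synsets = wn.synsets(label, pos=wn.NOUN)
    if not synsets:
        return None  # label not found in WordNet
    return synsets[0].min_depth()

# 'dalmatian' should score deeper (more specific) than 'animal'.
for word in ["animal", "dog", "dalmatian"]:
    print(word, label_specificity(word))

Under such a proxy, a shift from labels like "animal" toward labels like "dalmatian" across conditions would register as an increase in specificity.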
Similar resources
Crowd Prefers the Middle Path: A New IAA Metric for Crowdsourcing Reveals Turker Biases in Query Segmentation
Query segmentation, like text chunking, is the first step towards query understanding. In this study, we explore the effectiveness of crowdsourcing for this task. Through carefully designed control experiments and Inter Annotator Agreement metrics for analysis of experimental data, we show that crowdsourcing may not be a suitable approach for query segmentation because the crowd seems to have a...
Full text
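Since the snippet above turns on Inter-Annotator Agreement, a sketch of one standard IAA measure, Fleiss' kappa, may help fix ideas. The counts below are made up, and the paper proposes its own IAA metric, which differs from this.

# A sketch of a standard IAA measure (Fleiss' kappa) over crowdsourced
# categorical judgments; the counts are hypothetical.
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows = items (e.g., queries), columns = answer categories; each cell
# counts how many of the 5 workers chose that category for that item.
ratings = np.array([
    [5, 0, 0],   # unanimous agreement
    [3, 2, 0],   # mild disagreement
    [2, 2, 1],   # strong disagreement
])
print("Fleiss' kappa:", fleiss_kappa(ratings))

Kappa near 1 indicates agreement well above chance; values near 0 indicate agreement no better than chance.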
Perform Three Data Mining Tasks with Crowdsourcing Process
For data mining studies, because of the complexity of performing feature selection by hand, some of the labeling needs to be sent to workers through crowdsourcing. The process of outsourcing data mining tasks to users is often handled by software systems without sufficient knowledge of the users' age or place of residence. Uncertainty about the performance of virtual user...
Full text
Evaluating Complex Task through Crowdsourcing: Multiple Views Approach
With the popularity of massive open online courses (MOOCs), grading through crowdsourcing has become a prevalent approach for large-scale classes. However, grading complex tasks requires specific skills and effort, and crowdsourcing runs up against the insufficient knowledge of workers from the crowd. Due to the knowledge limitations of the crowd graders...
Full text
Crowdsourcing a Subjective Labeling Task: A Human-Centered Framework to Ensure Reliable Results
How can we best use crowdsourcing to perform a subjective labeling task with low inter-rater agreement? We have developed a framework for debugging this type of subjective judgment task, and for improving label quality before the crowdsourcing task is run at scale. Our framework alternately varies characteristics of the work, assesses the reliability of the workers, and strives to improve task ...
Full text
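The framework above includes a step that assesses worker reliability. A common baseline for that step is agreement with the per-item majority label; the sketch below uses hypothetical data and this majority-vote heuristic, which is an assumption rather than the paper's specific procedure.

# A minimal sketch of one worker-reliability check: score each worker by
# how often their label matches the per-item majority vote. Data and
# heuristic are hypothetical, not the paper's procedure.
from collections import Counter, defaultdict

labels = {  # (item, worker) -> label
    ("img1", "w1"): "cat", ("img1", "w2"): "cat", ("img1", "w3"): "dog",
    ("img2", "w1"): "car", ("img2", "w2"): "car", ("img2", "w3"): "car",
}

# Majority label per item.
per_item = defaultdict(list)
for (item, _), label in labels.items():
    per_item[item].append(label)
majority = {item: Counter(ls).most_common(1)[0][0] for item, ls in per_item.items()}

# Fraction of each worker's labels that agree with the majority.
scores = defaultdict(lambda: [0, 0])  # worker -> [matches, total]
for (item, worker), label in labels.items():
    scores[worker][0] += int(label == majority[item])
    scores[worker][1] += 1
for worker, (m, t) in sorted(scores.items()):
    print(f"{worker}: {m}/{t} labels agree with the majority")

Workers whose agreement rate falls well below their peers' can then be flagged for review before the task is run at scale.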
Interpreting Ambiguous Social Situations in Social Anxiety: Application of Computerized Task Measuring Interpretation Bias
Background and Aims: Interpretation bias, an important factor in the pathology of social anxiety disorder, has recently been considered in therapeutic approaches. Given the importance of interpretation bias in the treatment of social anxiety, and despite the ambiguity in the relationship between social anxiety and interpretation bias, we compared the interpretation bias in individua...
Full text